Maker Pro

PSoC Express: Does it work for semi-analog designs?

jasen

So now they have a tool that they can claim eliminates writing code. I
have only seen that once before in my career and that was a full page
ad in Byte magazine some 20+ years ago. I never saw anything further
from that company. :^)

kind of like a PLC for analogue then? that's a neat idea.

Bye.
Jasen
 
Nico Coesel

Henry Kiefer said:
I heard of the 8051 version, but I think it is not very useful to implant a core that is no more powerful. You can then use the Keil compiler, which is a very effective but expensive system. Maybe that is really a marketing decision, just to reach interested people the first time (via Google...).

Adding 2 16 bit numbers from memory takes at least 16 instructions on
an 8051 which take 4 to 12 clock cycles each. That would add up to 64
to 192 clock cycles. Any modern 16 (or 32) bit processor can do such
an operation in 3 instructions taking about 6 clock cycles. A compiler
can't break the rules of physics even if it is from Keil.
One thing I'm really interested in is how they integrate a core with very fine structures AND at the same time improve the analog system. That is a paradox, because good analog linearity requires thicker structures. At 90nm you can forget analog linearity.

I suppose they found a way to mix both on one chip.
 
Henry Kiefer

Nico Coesel said:
Adding 2 16 bit numbers from memory takes at least 16 instructions on
an 8051 which take 4 to 12 clock cycles each. That would add up to 64
to 192 clock cycles. Any modern 16 (or 32) bit processor can do such
an operation in 3 instructions taking about 6 clock cycles. A compiler
can't break the rules of physics even if it is from Keil.

There is a big difference between hand-optimizing assembler code and
high-level programming in C.
The Keil compiler is the best I have ever seen. Its output is very close to
hand-optimized code.

The M8C compiler does its job reasonably well, but without much finesse :)
And I don't think this compiler will get another major upgrade, because the
market is too small and newer cores will soon arrive. The compiler problem is
surely one reason for Cypress to change the core in general.
BTW: The compiler is included in PSoC Express for free but needs a license
in PSoC Designer.
I suppose they found a way to mix both on one chip.

That would be interesting. Time to do a patent application search...

Hopefully not the way Altera, for example, did with the on-chip voltage regulator for their CPLDs, which lets them run internally at 3.3 volts from an external 5 volts. That regulator dissipates so much that they distributed it around the core in a ring structure.

- Henry
 
Nico Coesel

Henry Kiefer said:
There is a big difference between hand-optimizing assembler code and
high-level programming in C.
The Keil compiler is the best I have ever seen. Its output is very close to
hand-optimized code.

Look at GCC, be amazed, and then compare the prices. Any modern C
compiler will output code which cannot be optimized much further
unless the input is very sloppy. However, the 8051 is an extremely
difficult CPU to optimize for. A horse is always faster than a tuned-up
dinosaur. If Cypress had chosen an ARM core to start with, they could
have done some serious DSP.
That would be interesting. Time to do a patent application search...

It's probably just a matter of running one piece of silicon through 2
types of processes.
 
Joel Kolstad

Nico Coesel said:
Look at GCC, be amazed and then compare the prices.

In general I think GCC is a great program and obviously all the more so for
being free, but when I used it a couple years ago for an AVR the output was
pretty poor compared to what a decent assembly language programmer would
create. I queried some people about this, and they said it's primarily a
result of GCC originally being targeted at "big iron" (32 bit+) machines, with a fair amount of efficiency lost when it is scaled down to a little 8 bit architecture.

That being said, the code I was doing wasn't time critical, I had plenty of
memory, so using GCC was definitely a "win."

I've been quite impressed with the C compilers from Rowley Associates
(www.rowley.co.uk); back when I did some MSP430 work his tools produced the
fastest code around, regardless of compiler price (the main competition being
IAR Systems, which is certainly a decent compiler itself, but *far* spendier
and just not *quite* as good as Rowley's).

---Joel
 
rickman

Nico said:
Adding 2 16 bit numbers from memory takes at least 16 instructions on
an 8051 which take 4 to 12 clock cycles each. That would add up to 64
to 192 clock cycles. Any modern 16 (or 32) bit processor can do such
an operation in 3 instructions taking about 6 clock cycles. A compiler
can't break the rules of physics even if it is from Keil.

Your numbers are interesting, but there are 8051 clones that use a
single clock cycle for many instructions. I seriously doubt that
Cypress will use a 12 clock version of the 8051 in any new design
regardless of what they are using on the current chips they make.
 
rickman

Joerg said:
None AFAIK. That's why the 8051 architecture remains so popular. Plus
you can find a local code expert almost anywhere.

I don't think the 8051 owes its popularity to the fact that you can get
it as a second source in the literal sense. Second sourcing is nearly
dead. However, being able to port your code to different versions of
the same core is very popular. That is why the ARM is taking off. No
one seriously thinks that they can plop another chip from a different
vendor into the same socket in the ARM world, and yet the ARM is the
CPU of choice for most applications.

There are a few people who like the idea of second source, but most
prefer a good first source.

Size is, surprisingly, a decreasing concern. One of my designs this year
had to be smaller than a couple of postage stamps. Yet I did it all
analog with around 60 parts. Going to 0402 was the answer and there
would have been even smaller parts available. A uC solution would have
cost more. So yeah, if you can save the 10c part that's fine but if that
requires a uC that costs 25c more it's just not feasible.

The fact that you have a design that was done better in analog does not
mean that the high degree of integration in a PSOC is not an advantage.
That is the main selling feature of these devices; one chip replaces
so many other parts that it is a major cost savings just because of the
reduction in assembly costs. With more and more designs trying to fit
into a couple of postage stamps, the overall size of the design is more
important than it ever was.

Still, a "one button tool" is nice as long as there is an open path
towards more intricate design. Think of it like Excel. That is a rather
simple tool that lets you key in almost any formula just as it comes to
mind. Some programmer has already implemented the code that executes it,
and we trust that process for the most part. If it's not enough we can
still fire up the C compiler, but in 95-plus percent of cases we don't
need to. All my business book-keeping is done in a database. It took less
than half a day to set up. In C that would have taken weeks.

It is nice if it really works and does not give headaches. The problem
with the PSOC is that it is a real bear to dig in and learn the
details. Instead of putting effort into instructional materials,
Cypress has come out with a tool to make the design "idiot" proof. But
then with every flaw in the tool you are left to dig into the design
and learn all the stuff you were being shielded from. This can lead to
schedule and performance problems. That says nothing about the issue
of learning enough about the parts to even know if they will do the job
at hand!
 
CBFalconer

Joel said:
In general I think GCC is a great program and obviously all the
more so for being free, but when I used it a couple years ago for
an AVR the output was pretty poor compared to what a decent
assembly language programmer would create. I queried some people
about this, and they said it's primarily a result of GCC originally
being targeted at "big iron" (32 bit+) machines, with a fair amount of
efficiency lost when it is scaled down to a little 8 bit architecture.

The code generation is a separate back-end module. You, and anyone
else, are perfectly free to improve it. You don't have to build a
whole compiler. Once the improvements are made, they will benefit any
language you use, e.g. Ada, Fortran, even C++ etc. Unfortunately
Pascal has not yet been properly integrated into the front-end.
 
Klaus Kragelund

Joerg said:
Klaus Kragelund wrote:


[...]
I have been working with the PSOC for the last 6 months now. My
recommendation to learn the ins and outs of this device is to lock
yourself into the lab for a week or two to get a prototype up and
running. The key is to delve into the PSOC; the seminars are no use
since they are too superficial. When you get into trouble, use the
PSOCDeveloper.com forum. It's great. Moreover, DON'T use the
"Sublimation" and "Condensation" modes of the compiler. The compiler is
buggy and these optimization functions simply don't work (I learned the
hard way, tracking down a bug for two days to find it was just the
checkmark in the compiler options that was the culprit).

Thanks for that info. This can prevent hours of frustration.

The analog functions are ok - I hope they will be even better with the
PSOC3

Regarding price, our production takes about 2cents to place an SMD
component. ...


2c just for placement? Are you guys still manufacturing in Scandinavia?
Maybe they should ease up on the taxes over there :)

Yes, still production in Denmark. But we have some production in Asia;
however, that seems to come to almost the same price. What is your price
for an SMD placement (my number is all-inclusive: machine time, operator
time, cost of the machine, cost of production space, heating,
electricity, overhead of the factory, etc.)?

Regards

Klaus
 
Nico Coesel

rickman said:
Your numbers are interesting, but there are 8051 clones that use a
single clock cycle for many instructions. I seriously doubt that
Cypress will use a 12 clock version of the 8051 in any new design
regardless of what they are using on the current chips they make.

This still leaves 16 instructions versus 3 in the best case.
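The 16-versus-3 instruction comparison in this subthread can be sketched in C; the cycle counts are the posters' own figures, and the comments describing compiler output are illustrative rather than taken from any particular toolchain:

```c
#include <stdint.h>

/* The operation under discussion: add two 16-bit values held in memory.
 * On an 8051, with its 8-bit accumulator, a compiler must load and add
 * each byte separately with carry handling (plus address setup), while
 * a 16/32-bit core compiles the same line to a few load/add/store
 * instructions. The C source is identical either way. */
uint16_t add16(const uint16_t *a, const uint16_t *b)
{
    return (uint16_t)(*a + *b);   /* wraps modulo 2^16 on overflow */
}
```

Note that the narrowing cast makes the wrap-around on overflow explicit; the underlying point of the thread is that the cost of this one-liner is set by the target architecture, not by the compiler.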
 
Nico Coesel

Joel Kolstad said:
In general I think GCC is a great program and obviously all the more so for
being free, but when I used it a couple years ago for an AVR the output was
pretty poor compared to what a decent assembly language programmer would
create. I queried some people about this, and they said it's primarily a
result of GCC originally being targeted at "big iron" (32 bit+) machines, with a fair amount of efficiency lost when it is scaled down to a little 8 bit architecture.

I think it has more to do with the AVR architecture not being
supported very well.
That being said, the code I was doing wasn't time critical, I had plenty of
memory, so using GCC was definitely a "win."

I've been quite impressed with the C compilers from Rowley Associates
(www.rowley.co.uk); back when I did some MSP430 work his tools produced the
fastest code around, regardless of compiler price (the main competition being
IAR Systems, which is certainly a decent compiler itself, but *far* spendier
and just not *quite* as good as Rowley's).

Rowley probably did a good job on the MSP430 but for ARM based
controllers they provide GCC with their own libraries.
 
Joerg

rickman said:
I don't think the 8051 owes its popularity to the fact that you can get
it as a second source in the literal sense. Second sourcing is nearly
dead. ...


Not in my line of business. Most clients really emphasize their desire
to be able to multi-source. This also means that companies that
deliberately make their products non pin-compatible are really shooting
themselves in the foot because guys like me never consider their products.

... However, being able to port your code to different versions of
the same core is very popular. That is why the ARM is taking off. No
one seriously thinks that they can plop another chip from a different
vendor into the same socket in the ARM world, and yet the ARM is the
CPU of choice for most applications.

True, ease of porting is a main concern and so is availability of local
code writers. The latter is where the 8051 (so far) wins hands down.

There are a few people who like the idea of second source, but most
prefer a good first source.





The fact that you have a design that was done better in analog does not
mean that the high degree of integration in a PSOC is not an advantage.
That is the main selling feature of these devices; one chip replaces
so many other parts that it is a major cost savings just because of the
reduction in assembly costs. With more and more designs trying to fit
into a couple of postage stamps, the overall size of the design is more
important than it ever was.

PSoCs are IMHO indeed quite competitively priced. However, it is amazing
how cheap assembly becomes when you have it done in huge batches. A
while ago I wanted to see if a client could benefit from migrating one
of my designs to a uC, where the uC could take over about 50% or more.
They told me the old analog design costs $2.50, including circuit
board, final test, packaging and shipping share. Tons of resistors,
caps, transistors etc. Blew me away, and I put that uC idea back in the
drawer. The uC itself would have cost around $1.65 in large quantities,
and that was after quite some negotiating.
It is nice if it really works and does not give headaches. The problem
with the PSOC is that it is a real bear to dig in and learn the
details. Instead of putting effort into instructional materials,
Cypress has come out with a tool to make the design "idiot" proof. But
then with every flaw in the tool you are left to dig into the design
and learn all the stuff you were being shielded from. This can lead to
schedule and performance problems. That says nothing about the issue
of learning enough about the parts to even know if they will do the job
at hand!

Yes. And this learning curve has to be worth it. IOW in my case I'd have
to be able to port some designs and create revenue from that effort
while my clients need to see some iron-clad cost savings. Of course,
these savings must include the amortization of my fees but that's no
problem for mass products.
 
Joerg

Klaus said:
Joerg said:
Klaus Kragelund wrote:


[...]

I have been working with the PSOC for the last 6 months now. My
recommendation to learn the ins and outs of this device is to lock
yourself into the lab for a week or two to get a prototype up and
running. The key is to delve into the PSOC; the seminars are no use
since they are too superficial. When you get into trouble, use the
PSOCDeveloper.com forum. It's great. Moreover, DON'T use the
"Sublimation" and "Condensation" modes of the compiler. The compiler is
buggy and these optimization functions simply don't work (I learned the
hard way, tracking down a bug for two days to find it was just the
checkmark in the compiler options that was the culprit).

Thanks for that info. This can prevent hours of frustration.


The analog functions are ok - I hope they will be even better with the
PSOC3

Regarding price, our production takes about 2cents to place an SMD
component. ...


2c just for placement? Are you guys still manufacturing in Scandinavia?
Maybe they should ease up on the taxes over there :)


Yes, still production in Denmark. But we have some production in Asia;
however, that seems to come to almost the same price. What is your price
for an SMD placement (my number is all-inclusive: machine time, operator
time, cost of the machine, cost of production space, heating,
electricity, overhead of the factory, etc.)?

Plus welfare taxes, a few years paid maternity leave, long vacations,
some more taxes ... SCNR.

Don't know the latest because that's what my clients handle. When
designing I calculate with well under $0.01 per simple part for Chinese
production. Also, one has to make sure that the parts variety is as
small as possible. That's why you see so many 100k resistors in my
designs :)

This is a couple years old but look at figure 2 on page 27:
Coincidentally they have Scandinavia on this one. That can really give
you the goose pimples. WRT automated assembly, I found that Europe and
the US are often a bit higher than that diagram suggests.

They will generally not quote you on a per part basis but want the whole
BOM first and then submit a total. The per part cost is often a closely
guarded number but after receiving the bids it's quite simple math.

Another thing that helps is that I never do layouts myself. I leave that
to the experts and the local one here is actually quite a bit older than
I am. For good reason. He knows how to keep the costs down because a
layout that isn't optimal for SMT machines is going to jack up the cost
even if the plant initially bids low to get in.
 
David Brown

Joel said:
In general I think GCC is a great program and obviously all the more so for
being free, but when I used it a couple years ago for an AVR the output was
pretty poor compared to what a decent assembly language programmer would
create. I queried some people about this, and they said it's primarily a
result of GCC originally being targeted at "big iron" (32 bit+) machines, with a fair amount of efficiency lost when it is scaled down to a little 8 bit architecture.

Being a general purpose C compiler rather than a specifically written
8-bit C compiler, gcc works much better on cpus that are well suited to
C - the main features being at least 16-bit wide registers and ALU,
plenty of registers, single memory space, and several pointer registers.
The AVR scores 1 out of 4, and is therefore a poor match for C
(although better than most 8-bitters). The msp430 and the ARM, in
comparison, are very good matches.

One of the advantages of gcc is that any port will benefit from
front-end features and optimisations, at which gcc is way ahead of most
other compilers. But at the back-end, it will vary according to the
suitability of the architecture, and how much time and effort has been
put into the code generator and optimiser. Thus it will vary a lot from
target to target, and also over time - modern avr gcc releases are much
better than older ones (the same applies to arm gcc). Still, the
architecture of gcc is always going to cause limitations when porting to
devices like the AVR - the very best compilers for 8-bit devices have to
be designed for the job from start to finish.
That being said, the code I was doing wasn't time critical, I had plenty of
memory, so using GCC was definitely a "win."

I've been quite impressed with the C compilers from Rowley Associates
(www.rowley.co.uk); back when I did some MSP430 work his tools produced the
fastest code around, regardless of compiler price (the main competition being
IAR Systems, which is certainly a decent compiler itself, but *far* spendier
and just not *quite* as good as Rowley's).

As I understand it, Rowley's msp430 compiler is their own, while their
arm compiler is gcc.
 
Klaus Kragelund

Joerg skrev:
Klaus said:
Joerg said:
Klaus Kragelund wrote:


[...]


I have been working with the PSOC for the last 6 months now. My
recommendation to learn the ins and outs of this device is to lock
yourself into the lab for a week or two to get a prototype up and
running. The key is to delve into the PSOC; the seminars are no use
since they are too superficial. When you get into trouble, use the
PSOCDeveloper.com forum. It's great. Moreover, DON'T use the
"Sublimation" and "Condensation" modes of the compiler. The compiler is
buggy and these optimization functions simply don't work (I learned the
hard way, tracking down a bug for two days to find it was just the
checkmark in the compiler options that was the culprit).


Thanks for that info. This can prevent hours of frustration.



The analog functions are ok - I hope they will be even better with the
PSOC3

Regarding price, our production takes about 2cents to place an SMD
component. ...


2c just for placement? Are you guys still manufacturing in Scandinavia?
Maybe they should ease up on the taxes over there :)


Yes, still production in Denmark. But we have some production in Asia;
however, that seems to come to almost the same price. What is your price
for an SMD placement (my number is all-inclusive: machine time, operator
time, cost of the machine, cost of production space, heating,
electricity, overhead of the factory, etc.)?

Plus welfare taxes, a few years paid maternity leave, long vacations,
some more taxes ... SCNR.

Don't know the latest because that's what my clients handle. When
designing I calculate with well under $0.01 per simple part for Chinese
production. Also, one has to make sure that the parts variety is as
small as possible. That's why you see so many 100k resistors in my
designs :)

This is a couple years old but look at figure 2 on page 27:
Coincidentally they have Scandinavia on this one. That can really give
you the goose pimples. WRT automated assembly, I found that Europe and
the US are often a bit higher than that diagram suggests.

They will generally not quote you on a per part basis but want the whole
BOM first and then submit a total. The per part cost is often a closely
guarded number but after receiving the bids it's quite simple math.

Another thing that helps is that I never do layouts myself. I leave that
to the experts and the local one here is actually quite a bit older than
I am. For good reason. He knows how to keep the costs down because a
layout that isn't optimal for SMT machines is going to jack up the cost
even if the plant initially bids low to get in.

Well, part of the reason our quotes from Asia are so expensive is
that we have to approve the production site with respect to environmental
concerns, workforce working conditions, quality constraints and the
financial capability of the company. If we chose a manufacturer that
used child labor, we could do it cheaper. Can you recommend one of those
big houses your customers use, or is that confidential information?

Regards

Klaus
 
rickman

Joerg said:
Not in my line of business. Most clients really emphasize their desire
to be able to multi-source. This also means that companies that
deliberately make their products non pin-compatible are really shooting
themselves in the foot because guys like me never consider their products.

That is interesting. I guess there are apps where it is more important
to multi-source than it is to get the best price. I have seen this a
lot in the FPGA world where it is pretty much impossible to second
source a part on a board. But in terms of the HDL, many companies
write their code in the most generic way possible so that it can be
retargeted to a different line or even brand of chips. That means you
can't take advantage of any of the architecture specific features that
could save a lot of real estate and therefore $$$. The trade off is
that you can often beat down the price on both brands since they know
the socket is never a lock. Of course the FPGA vendors say optimizing
for one part is the way to save $$$. I have not seen a clear tradeoff
since the other side of the coin has to do with price negotiation. I
have been told that there is *no* minimum selling price on FPGAs...
:^)

There is likely something similar with MCUs. If you work only with
generic 8051 clones, you may get the lowest price for that part since
they have to compete in a commodity market. But another part using a
different CPU may well be a better fit for the design and so end up
with a lower price than the generic part. I can't say I have studied
this much, but with my current designs I am having a hard time finding
the perfect part. If I do find an MCU that does everything I need, it
will save me lots of trouble as well as board space and cost. I won't
care much which CPU it is, although I greatly prefer ARMs at this
point. There is nearly no cost or size penalty for ARMs any more. In
quantity you can get them for under $1.

True, ease of porting is a main concern and so is availability of local
code writers. The latter is where the 8051 (so far) wins hands down.

Why do you need software folks that "know" the 8051? Every brand of
MCU has different peripherals and each one has a learning curve. The
instruction set is pretty much irrelevant for 99.9% of the work you do
on most programs.

Personally, I see the Cortex-M3 as displacing both the 8 bit MCUs and
the ARM7 over the next year or two. I still don't have specific info
on the power consumption potential of the new CM3 core, but all
indications are that it should beat the ARM7 which currently beats many
8 bit devices. The CM3 has better code density and speed than the ARM7
and should do a pretty good job competing against the 8 bitters on
nearly all fronts. With smaller process features the size of the core
is becoming insignificant compared to the memory and peripherals.

With Cypress designing a new PSOC3 family around the CM3, I expect this
to be a very powerful combination. I can't imagine the 8051 version
being significantly cheaper. I expect it is being offered as a
"comfort" factor for the people who are wed to that family. We'll see
when the parts are officially announced.
 
Hal Murray

Still, the
architecture of gcc is always going to cause limitations when porting to
devices like the AVR - the very best compilers for 8-bit devices have to
be designed for the job from start to finish.

I'm not a compiler guru. What would you do different in the front end
if you knew the target would be an 8-bit cpu?
 
David Brown

Hal said:
Still, the

I'm not a compiler guru. What would you do different in the front end
if you knew the target would be an 8-bit cpu?

I'm not a compiler guru either, but have used a lot of compilers for a
lot of targets over the years, and seen a lot of generated code.

Much of the C specification is based on the assumption that "int" is the
fastest type to work with, and an "int" is required to be at least 16
bits. This means that there are many places in C where data is forced to
be 16-bit signed ints, even though the target may be faster at 8-bit
data (and many small 8-bitters are also faster at unsigned rather than
signed for some operations). For example, handling of character
constants, switches, arithmetic operations - there are many cases of
"int promotion". Some of these can be spotted and removed at the
back-end, to avoid unnecessary instructions, but there can often be
remnants of inefficient int promotion. On some 8-bit compilers, the
tendency of the front-end to use ints means that pairs of registers are
allocated instead of single registers. Even if peephole optimisation on
the back-end removes the extra instructions, you lose the benefits of
these wasted registers that are no longer available for other data.
This sort of thing requires tighter cooperation between the front-end
and back-end of the compiler.
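The int-promotion effect described above can be seen directly in C. This is a minimal sketch, with the values chosen so the promoted and truncated results differ:

```c
#include <stdint.h>

/* C's "usual arithmetic conversions" promote both uint8_t operands to
 * int before the addition, so the sum is computed in (at least) 16 bits
 * even on an 8-bit target -- the hidden widening discussed above. */
int promoted_sum(uint8_t a, uint8_t b)
{
    return a + b;              /* computed as int: can exceed 255 */
}

uint8_t truncated_sum(uint8_t a, uint8_t b)
{
    return (uint8_t)(a + b);   /* result wrapped back to 8 bits */
}
```

A back-end can often prove the high byte is unused in the second case and drop it, but as noted above the front-end may already have allocated register pairs by then.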

Another example is that the cost of function calls can be quite high on
an AVR - access to stacked data is either slow (for compilers using
the return stack for data) or wastes a precious pointer register (for
compilers using a pointer to a separate data stack). Compilers with a
powerful enough front-end for global inter-procedural optimisations can
eliminate a substantial part of this overhead through automatic inlining
across the whole program. gcc 4.1 supports such optimisations, but not
earlier versions - more aggressive dedicated small micro compilers like
ByteCraft make heavy use of such optimisations.
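The inlining point can be illustrated with a sketch; `scale` is a hypothetical helper, and whether it is actually inlined depends on the compiler and optimisation level (e.g. gcc at -O2):

```c
/* A small helper with internal linkage: a compiler doing
 * inter-procedural optimisation can inline it into its caller,
 * removing the call/return and any stack traffic for the argument. */
static int scale(int x)
{
    return 3 * x + 1;
}

int process(int v)
{
    /* Likely compiled as 3*v + 1 directly, with no call overhead. */
    return scale(v);
}
```

On a target like the AVR, where each avoided call also avoids slow stacked-data access, this kind of whole-program inlining is worth proportionally more than on a larger CPU.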

mvh.,

David
 
Joerg

rickman said:
That is interesting. I guess there are apps where it is more important
to multi-source than it is to get the best price. I have seen this a
lot in the FPGA world where it is pretty much impossible to second
source a part on a board. But in terms of the HDL, many companies
write their code in the most generic way possible so that it can be
retargeted to a different line or even brand of chips. That means you
can't take advantage of any of the architecture specific features that
could save a lot of real estate and therefore $$$. The trade off is
that you can often beat down the price on both brands since they know
the socket is never a lock. Of course the FPGA vendors say optimizing
for one part is the way to save $$$. I have not seen a clear tradeoff
since the other side of the coin has to do with price negotiation. I
have been told that there is *no* minimum selling price on FPGAs...
:^)

There is likely something similar with MCUs. If you work only with
generic 8051 clones, you may get the lowest price for that part since
they have to compete in a commodity market. But another part using a
different CPU may well be a better fit for the design and so end up
with a lower price than the generic part. I can't say I have studied
this much, but with my current designs I am having a hard time finding
the perfect part. If I do find an MCU that does everything I need, it
will save me lots of trouble as well as board space and cost. I won't
care much which CPU it is, although I greatly prefer ARMs at this
point. There is nearly no cost or size penalty for ARMs any more. In
quantity you can get them for under $1.

I am still hoping they come out with a few generic ARMs some day so we
can enjoy the same design security as with the 8051.
Why do you need software folks that "know" the 8051? Every brand of
MCU has different peripherals and each one has a learning curve. The
instruction set is pretty much irrelevant for 99.9% of the work you do
on most programs.

It's a matter of getting up to speed. Those folks come with their
laptops and the Keil compiler all set up. Most of the time we found
someone within less than 1/2hr driving distance. They know the nooks,
crannies and pathologies of a lot of the 8051 versions, all the register
names and so on. All that does help if you've got only a few days to try
something new.

Most of the time with my designs I have to do something "weird". For
example, back in the early 90's the concept of stopping a uC clock would
make people's toe nails curl but I just had to on an 80C89.

Personally, I see the Cortex-M3 as displacing both the 8 bit MCUs and
the ARM7 over the next year or two. I still don't have specific info
on the power consumption potential of the new CM3 core, but all
indications are that it should beat the ARM7 which currently beats many
8 bit devices. The CM3 has better code density and speed than the ARM7
and should do a pretty good job competing against the 8 bitters on
nearly all fronts. With smaller process features the size of the core
is becoming insignificant compared to the memory and peripherals.

With Cypress designing a new PSOC3 family around the CM3, I expect this
to be a very powerful combination. I can't imagine the 8051 version
being significantly cheaper. I expect it is being offered as a
"comfort" factor for the people who are wed to that family. We'll see
when the parts are officially announced.

Yes, we'll have to see. At least for my usual area of work it's not so
much the uC challenging the PSoC in terms of cost. It is the remarkably
low cost of plain old discrete circuitry when assembled in Asia. The
only way to beat it would be if the PSoC can literally take over 100% of
the functionality and every time I looked they couldn't. But I'll keep
looking :)

Many times the designs are later migrated to ASIC. IOW clients jump over
the intermediate step of PSoC. Another one of those projects is coming
down the pike end of next week. It almost has to go ASIC unless PSoC
could be had for 10c-15c a pop (this one will be a disposable).
 
Hal Murray

I am still hoping they come out with a few generic ARMs some day so we
can enjoy the same design security as with the 8051.

I haven't worked with 8051s. The ARMs I've worked with come
with various on-chip bells and whistles: counters/timers/PWMs,
serial ports, ADCs, SPI... Do you use 8051s with that sort
of thing, and/or are there a few combinations that everybody
makes with the same pinout?

Many times the designs are later migrated to ASIC.

What is the second-source situation like in that area?
 